Adversarial Examples: Opportunities and Challenges
Authors
Abstract
Similar resources
Islam; Challenges and Opportunities
Undoubtedly, the Islamic Revolution of Iran has been the source of the Islamic awakening in the Middle East. Therefore, the Islamic Republic of Iran plays the role of an example and a director of political developments in the region. The Islamic awakening in the Arab countries has been able to question the dominant discourse of dictators in the Middle East and topple their regimes. These developments include ...
Generating Adversarial Examples with Adversarial Networks
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires mor...
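To make "small-magnitude perturbations" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the standard gradient-based attacks alluded to above; it assumes PyTorch, and the model, labels, and epsilon value are illustrative placeholders rather than anything specified in the cited work.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Perturb x by epsilon in the direction of the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the adversarial image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()

A batch perturbed this way typically still looks unchanged to a human observer while flipping the model's prediction, which is the vulnerability these abstracts describe.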
Semantic Adversarial Examples
Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is...
Spatially Transformed Adversarial Examples
Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the Lp distance for penalizing perturbations. Researchers have explored different defense methods to defend against such adversarial attacks. While the effectiveness of Lp distance as...
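For context, the Lp-penalized attack formulation referred to here is usually the following constrained problem (a standard formulation from the literature; the symbols are chosen for illustration, not quoted from the abstract):

\[
\min_{\delta} \; \|\delta\|_p \quad \text{s.t.} \quad f(x + \delta) \neq f(x), \qquad x + \delta \in [0,1]^n ,
\]

where \(f\) is the classifier, \(x\) the clean input, and \(\delta\) the pixel-space perturbation whose Lp norm is kept small.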
Synthesizing Robust Adversarial Examples
Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems. Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. When generated with standard methods, these examples do not consistently fool a classifier in the physical world due t...
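A common remedy for this physical-world brittleness, expectation over transformations (which the truncated snippet does not name, so this is an assumption about the approach), optimizes the adversarial input in expectation over a distribution of plausible transformations:

\[
x_{\mathrm{adv}} = \arg\max_{x'} \; \mathbb{E}_{t \sim T}\!\left[\log P\!\left(y_{\mathrm{target}} \mid t(x')\right)\right] \quad \text{s.t.} \quad \mathbb{E}_{t \sim T}\!\left[d\!\left(t(x'), t(x)\right)\right] < \epsilon ,
\]

where \(T\) is a distribution over transformations (rotation, lighting, camera noise), \(d\) is a perceptual distance, and the symbols are illustrative.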
Journal
Journal title: IEEE Transactions on Neural Networks and Learning Systems
Year: 2019
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/tnnls.2019.2933524